# Enhanced mathematical reasoning
**Deepseek R1 0528 Bf16** · MIT · cognitivecomputations · 129 · 1

DeepSeek-R1-0528 is a minor version upgrade of the DeepSeek R1 model. It significantly improves reasoning ability through increased compute and algorithmic optimization, and performs excellently on benchmarks in mathematics, programming, and general logic.

Tags: Large Language Model · Transformers

**Deepseek R1 0528 Qwen3 8B GPTQ Int4 Int8Mix** · MIT · QuantTrio · 154 · 1

A quantized version of DeepSeek-R1-0528-Qwen3-8B, offering significant improvements in reasoning ability and a reduced hallucination rate; suitable for a variety of natural language processing tasks.

Tags: Large Language Model · Transformers

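The entry above stores weights in Int4/Int8. As a rough illustration of what 4-bit weight quantization means, here is a minimal round-to-nearest sketch in NumPy. GPTQ itself additionally compensates quantization error layer by layer using second-order statistics, which this sketch omits.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric round-to-nearest Int4 quantization of a weight group.

    Stores 4-bit signed integers in [-8, 7] plus one float scale per
    group. This is only the storage format; GPTQ chooses the integers
    more carefully to minimize layer output error.
    """
    scale = float(np.abs(w).max()) / 7.0  # map the largest weight to +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.07, 0.31, -0.29], dtype=np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print(q)        # small signed integers, e.g. [ 3 -2  7 -7]
print(np.abs(w - w_hat).max())  # reconstruction error bounded by scale/2
```

With per-group scales (commonly 64 or 128 weights per group), the error bound `scale/2` stays small relative to the weights in each group.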
**Autoregressive 32B** · Apache-2.0 · Multiverse4FM · 1,945 · 1

Autoregressive-32B is the autoregressive baseline model for Multiverse-32B, providing strong support for text generation tasks.

Tags: Large Language Model · Transformers

**Phi 4 Reasoning Plus Unsloth Bnb 4bit** · MIT · unsloth · 3,504 · 2

Phi-4-reasoning-plus is Microsoft's most advanced open-weight reasoning model, fine-tuned from Phi-4 with a focus on advanced reasoning in mathematics, science, and coding.

Tags: Large Language Model · Transformers · Supports Multiple Languages

**Phi 4 Reasoning Plus GGUF** · MIT · lmstudio-community · 5,205 · 4

Phi-4-reasoning-plus is a large language model developed by Microsoft with enhanced reasoning capabilities, specifically optimized for complex mathematical problems and multi-step reasoning tasks.

Tags: Large Language Model · Supports Multiple Languages

**STILL 3 1.5B Preview** · RUC-AIBOX · 2,186 · 10

STILL-3-1.5B-preview is a slow-thinking model enhanced with reinforcement learning, achieving 39.33% accuracy on the AIME benchmark.

Tags: Large Language Model · Transformers

**Sky T1 32B Preview GGUF** · bartowski · 1,069 · 81

Sky-T1-32B-Preview is a 32B-parameter large language model quantized with llama.cpp using an importance matrix (imatrix), suitable for text generation tasks.

Tags: Large Language Model · English

**Llama 3.2 Rabbit Ko 3B Instruct** · CarrotAI · 2,169 · 9

Carrot Llama-3.2 Rabbit Ko is an instruction-tuned large language model supporting Korean and English, performing excellently on text generation tasks.

Tags: Large Language Model · Safetensors · Supports Multiple Languages

**L3.1 8B Sunfall Stheno V0.6.1** · crestf411 · 183 · 4

Sunfall is a natural language processing model built on Llama-3.1-8B-Stheno-v3.4, tailored to specific functions and application scenarios.

Tags: Large Language Model · Transformers

**Deepseek Coder V2 Lite Base AWQ** · Other · TechxGenus · 229.29k · 2

DeepSeek-Coder-V2 is an open-source mixture-of-experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on specific code tasks.

Tags: Large Language Model · Transformers

**Qwen2 7B Instruct** · Apache-2.0 · rubra-ai · 62 · 5

A model further fine-tuned from Qwen2-7B-Instruct, excelling at complex multi-turn tool/function-calling tasks.

Tags: Large Language Model · Transformers · Supports Multiple Languages

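Multi-turn tool/function calling, as highlighted in the entry above, ultimately requires the application to parse the model's emitted call and execute it. Below is a minimal, hypothetical dispatch step; the JSON shape and tool names here are assumptions for illustration, not the model's actual chat format.

```python
import json

# Hypothetical tool registry; real tool schemas follow the model's chat template.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch_tool_call(raw: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool.

    Expected shape (an assumption for this sketch):
        {"name": "add", "arguments": {"a": 2, "b": 3}}
    The returned string would be fed back to the model as a tool message,
    continuing the multi-turn loop.
    """
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps({"name": call["name"], "result": result})

print(dispatch_tool_call('{"name": "add", "arguments": {"a": 2, "b": 3}}'))
# {"name": "add", "result": 5}
```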
**Smaug 72B V0.1** · Other · abacusai · 119 · 468

The first open-source large language model to achieve an average score above 80% (on the Hugging Face Open LLM Leaderboard), fine-tuned from MoMo-72B-lora-1.8.7-DPO using the DPO-Positive (DPOP) technique for preference-learning optimization.

Tags: Large Language Model · Transformers

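DPO-Positive (DPOP), mentioned above, extends DPO with a penalty that discourages the policy from lowering the likelihood of the preferred response. A scalar sketch of one common formulation of the loss follows; the β and λ values are illustrative, and exact scaling conventions vary between write-ups.

```python
import math

def dpop_loss(pi_w, pi_l, ref_w, ref_l, beta=0.3, lam=5.0):
    """DPO-Positive loss on one preference pair (scalar sketch).

    pi_*/ref_* are total log-probs of the chosen (w) and rejected (l)
    responses under the policy and the frozen reference model. DPOP adds
    a penalty, max(0, log pi_ref(y_w) - log pi_theta(y_w)), inside the
    sigmoid argument, so the policy is punished for making the chosen
    response LESS likely than it was under the reference.
    """
    margin = (pi_w - ref_w) - (pi_l - ref_l)   # standard DPO margin
    penalty = max(0.0, ref_w - pi_w)           # active only when chosen degrades
    z = beta * (margin - lam * penalty)
    return -math.log(1.0 / (1.0 + math.exp(-z)))  # -log sigmoid(z)

# Penalty inactive: the policy raised the chosen response's likelihood.
ok = dpop_loss(pi_w=-10.0, pi_l=-14.0, ref_w=-11.0, ref_l=-12.0)
# Penalty active: the chosen response became less likely than under ref,
# even though the DPO margin is still positive.
bad = dpop_loss(pi_w=-13.0, pi_l=-16.0, ref_w=-11.0, ref_l=-12.0)
print(ok < bad)  # True
```

This is the failure mode DPOP targets: plain DPO can drive both log-probs down as long as the margin grows, while the penalty term keeps the chosen response's likelihood anchored.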
**SOLAR Math 2x10.7b V0.2** · macadeliccc · 92 · 4

A large language model created by merging two Solar-10.7B instruction-tuned models, with performance comparable to GPT-3.5 and Gemini Pro and surpassing Mixtral-8x7B.

Tags: Large Language Model · Transformers

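Model merging, as used for the entry above, is at its simplest a parameter-wise interpolation between checkpoints. The listing does not say which merge recipe was actually used (tools like mergekit also offer SLERP, TIES, and MoE-style merges), so this sketch shows only the basic linear form.

```python
import numpy as np

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two checkpoints' parameters, key by key.

    The simplest merge recipe: merged = alpha * A + (1 - alpha) * B.
    Both checkpoints must share the same architecture (same keys/shapes).
    """
    assert state_a.keys() == state_b.keys()
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy "state dicts" standing in for real model checkpoints.
a = {"layer.weight": np.array([1.0, 2.0]), "layer.bias": np.array([0.0])}
b = {"layer.weight": np.array([3.0, 0.0]), "layer.bias": np.array([1.0])}
merged = linear_merge(a, b)
print(merged["layer.weight"])  # [2. 1.]
```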
**Metamath Mistral 7B** · Apache-2.0 · meta-math · 2,152 · 95

MetaMath-Mistral-7B is a mathematical reasoning model fine-tuned from Mistral-7B on the MetaMathQA dataset, significantly improving its ability to solve mathematical problems.

Tags: Large Language Model · Transformers

**Wizardlm 13B V1.2** · WizardLMTeam · 989 · 226

WizardLM-13B V1.2 is a large language model trained from Llama-2 13B, focusing on complex instruction-following capabilities.

Tags: Large Language Model · Transformers